May 2019
[Script run 2020-07-08 11:19:28]
## Warning: `as.tibble()` is deprecated as of tibble 2.0.0.
## Please use `as_tibble()` instead.
## The signature and semantics have changed, see `?as_tibble`.
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.
In the tables that follow, reasons for exclusion and their counts are provided. FALSE indicates that a trial or participant is not excluded.
First, participants are excluded if they fail attention checks.
##
## attnCheckYear                    2
## attnCheckYear, attnCheckMarker   2
## FALSE                           33
Next, participants who failed to complete the entire experiment are removed. This is largely because the experiment failed to complete in Safari (form submission refreshed the page); Safari also had a bug whereby the advice was always displayed at the far left of the timeline, meaning that the manipulation was non-functional.
Next, outlying trials are removed.
##
## FALSE     1754
## timeEnd     16
Now that outliers have been removed, participants who lost too many trials (>3) are also excluded.
##
## attnCheckYear                    1
## attnCheckYear, safari            1
## attnCheckYear, attnCheckMarker   2
## FALSE                           31
## outlyingTrials                   1
## safari                           1
We are now ready to construct a data frame of participant averages.
We can now check for participants who are outliers as a whole, and remove them.
##
## attnCheckYear                          1
## attnCheckYear, safari, responseError   1
## attnCheckYear, attnCheckMarker         2
## FALSE                                 31
## outlyingTrials, responseCorrect        1
## safari                                 1
Lastly, we can check the debrief information for participants who appeared to guess the manipulation (one advisor agrees with them) and remove those participants.
##
## attnCheckYear                          1
## attnCheckYear, safari, responseError   1
## attnCheckYear, attnCheckMarker         2
## FALSE                                 31
## outlyingTrials, responseCorrect        1
## safari                                 1
We also want to remove participants who had multiple attempts at the study. We’ll remove them if they have performed any core trials (trials with the experimental manipulation active), or if any questions they answer on the core trials have been answered on previous attempts at the task.
##
## attnCheckYear                          1
## attnCheckYear, safari, responseError   1
## attnCheckYear, attnCheckMarker         2
## FALSE                                 31
## outlyingTrials, responseCorrect        1
## safari                                 1
Our final participant list consists of 31 participants who completed an average of 27.29 trials each. 14 of these received feedback on the task, while the remaining 17 did not.
First we offer a characterisation of the task, to provide the reader with a sense of how the participants performed.
The statistics for many of these are broken down as a cross-section of two factors, decision and feedback. Decision is a within-subjects variable, and indicates whether the judgement under consideration was the first (pre-advice) or last (post-advice) decision. Feedback is a between-subjects variable, and indicates whether the participant received feedback immediately following the last decision on a trial. Feedback allows participants to track the value of advice directly.
Note: “first” and “last” are used as terms simply because they arrange the factors into alphabetical order with no messing about. Other terms would work equally well (e.g. initial/final is common in the literature).
Participants offered estimates of the year in which various events took place. The correct answers were always between 1900 and 2000, although the timeline on which participants responded went from 1890 to 2010 in order to allow extra room for advice. Participants answered by dragging a marker onto a timeline. Markers of various widths were available for the participants to choose, with wider markers which covered more years being worth fewer points. Participants were informed that a correct answer was one in which the marker covered the year in which the event took place.
Three different markers were available:
| marker | years | points |
|---|---|---|
| thin | 3 | 9 |
| medium | 9 | 3 |
| wide | 27 | 1 |
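The width-to-points mapping in the table above is simply 27 divided by the marker width in years. A minimal sketch (in Python rather than the study's R code; the names are illustrative):

```python
# Illustrative sketch (not the study code): points for a correct answer
# equal 27 divided by the marker width in years.
MARKER_WIDTHS = {"thin": 3, "medium": 9, "wide": 27}

def points_for(marker: str) -> int:
    """Points awarded when the marker covers the target year."""
    return 27 // MARKER_WIDTHS[marker]

print({m: points_for(m) for m in MARKER_WIDTHS})
# {'thin': 9, 'medium': 3, 'wide': 1}
```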
These markers were used by the participants as described in the table below:
Marker usage summary table (means) for initial and final decisions
Shows mean marker usage proportion for final and initial decisions for each feedback condition. Columns with NA represent totals across that variable.
Data are aggregated within each participant before combination (and hence do not sum to 1). Except where otherwise mentioned, data presented will be in this manner - aggregations of individual participants’ means.
## Warning: `fun.y` is deprecated. Use `fun` instead.
Marker usage graph Shows the proportion of marker usage for each participant by decision and feedback status.
Responses are regarded as correct if the target year is included within the marker range.
The error is calculated as the distance from the centre of the answer marker to the correct year. It is thus possible for correct answers to have non-zero error, and the error for correct answers is likely to scale with the marker size.
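The two quantities can be sketched as follows (Python, with names and the centre-±-half-width treatment of markers as assumptions; the study code may handle year ranges discretely):

```python
# Illustrative sketch: a response is correct if the target year falls within
# the marker's range; error is the distance from the marker centre to the
# target year. The half-width treatment of the marker is an assumption.
def is_correct(centre: float, width: float, target: int) -> bool:
    return abs(target - centre) <= width / 2

def error(centre: float, target: int) -> float:
    return abs(target - centre)

# A correct answer can still have non-zero error:
print(is_correct(1955, 9, 1958), error(1955, 1958))
# True 3
```

On this sketch, p(correct|3yr) is just `is_correct(centre, 3, target)`.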
Points are scored only on correct trials. The points scored for a correct trial are equal to 27/marker width, meaning that the wider a marker is the fewer points are scored.
We cannot calculate calibration using p(correct), because the marker width serves as the indicator for confidence and changing marker also changes p(correct) by increasing the range of values considered correct.
We can, however, use one of the following:
- error at different marker widths, as an indicator of the degree to which increased accuracy is associated with increased precision
- p(correct|3yr), indicating whether the answer would have been correct had the 3-year marker been used, centred in the same place
Firstly, we can consider the time taken for the entire trial.
Second, we can look at the response time - the difference between the time at which the response period opens and the time at which the response is received.
We want to know how the advisors behave. They are programmed to be different, but the actual advice they can offer is limited by the circumstances of a trial. If, for instance, they are instructed to agree and be correct, this is only possible if the difference between the edge of the initial response marker and the correct answer is less than the advisor’s precision.
Advice consists of the placement of a marker on the timeline, similar to how participants make their decisions. Advice is classified according to two key properties:
Agreement: advice agrees with the participant when there is at least one year of overlap between the participant’s marker and the advisor’s marker.
Accuracy: advice is accurate when the advisor’s marker touches the target year.
It is possible for advice to be both accurate and agreeing, to be one but not the other, or to be neither. Each of these cases is expected to occur within the study for most participants, under most advice profiles.
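The classification can be sketched as below (treating each marker as the interval centre ± width/2; the exact overlap rule in the study code may differ):

```python
# Sketch: classify advice relative to the participant's marker and the target.
def overlaps(c1: float, w1: float, c2: float, w2: float) -> bool:
    # at least one year of overlap between the two marker ranges
    return abs(c1 - c2) <= (w1 + w2) / 2

def classify(advice_c, advice_w, answer_c, answer_w, target):
    agree = overlaps(advice_c, advice_w, answer_c, answer_w)
    accurate = abs(target - advice_c) <= advice_w / 2
    return agree, accurate

# Agreeing but inaccurate advice:
print(classify(1950, 9, 1952, 3, 1960))
# (True, False)
```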
Advice profiles determine the kinds of advice an advisor attempts to provide. They specify relative quantities of advice rules, which are selected from an exhaustible pool. In cases where the selected advice rule cannot be fulfilled (e.g. agree-and-be-correct where the participant’s answer is too far from the correct answer), a fallback rule is invoked.
The following advice types are available:
| name | description | fallback |
|---|---|---|
| Correct | The advisor gives a correct answer | none |
| Correctish | The advisor gives an answer sampled from a normalish distribution around the correct answer | none |
| Agree | The advisor gives an agreeing answer | none |
| Agreeish | The advisor gives an answer sampled from a normalish distribution around the participant’s answer | none |
| Correct Agree | The advisor gives advice which is both correct and agreeing | Agree |
| Correct Disagree | The advisor gives advice which is correct, but which does not agree | Correct |
| Disagree Reflected | The advisor gives advice which is the participant’s answer reflected in the correct answer, while disagreeing with the participant | Disagree Reversed |
| Disagree Reversed | The advisor gives advice which is the correct answer reflected in the participant’s answer, while disagreeing with the participant | always possible if Disagree Reflected is not |
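The fallback column above can be read as a chain that is followed until a feasible rule is found. A hypothetical sketch (rule names from the table; the `feasible` predicate stands in for the trial-specific feasibility check, which is an assumption here):

```python
# Hypothetical sketch of the fallback mechanism; not the study code.
FALLBACKS = {
    "Correct Agree": "Agree",
    "Correct Disagree": "Correct",
    "Disagree Reflected": "Disagree Reversed",
}

def actual_type(nominal: str, feasible) -> str:
    """Follow the fallback chain until a feasible rule is found."""
    rule = nominal
    while not feasible(rule) and rule in FALLBACKS:
        rule = FALLBACKS[rule]
    return rule

# e.g. correct-and-agreeing advice is impossible, but agreement alone is:
print(actual_type("Correct Agree", lambda r: r == "Agree"))
# Agree
```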
The advisors appear in different orders across blocks. The first advisor (position 0) is introduced first, appears above the second, and gives its advice first on each trial. The order is randomised each block, and thus the mean position should balance out to around 0.5 for each advisor.
The advice offered, both nominal and actual, should be equivalent between feedback conditions.
The nominal type of the advice is the advice selected for the advisor to give. The actual type of advice is the advice the advisor actually gave, i.e. allowing for fallbacks where the requested advice type could not be supplied.
Advisors are supposed to differ in their accuracy. This is expected to hold true whether accuracy is cast in terms of the proportion of advice which is correct or mean error. These values should also be stable between feedback conditions.
Off-brand advice occurs when the advisor offers its minority advice type (i.e. disagreeReflected). This allows comparison of perception of advisors (based on their advice on previous trials) while controlling for differences in their actual advice (on the current trial).
Because both advisors use the same off-brand advice, there should be no noticeable differences here.
Advisors are also supposed to differ in their agreement rates.
There should be no noticeable differences here.
Distance is the continuous version of agreement - the difference between the centre of the advice and the centre of the initial estimate.
There should be no noticeable differences here.
The measure of influence is weight-on-advice (WoA). This is well-defined for values between 0 and 1 (truncated otherwise), and is \[\text{WoA} = (\text{final} - \text{initial}) / (\text{advice} - \text{initial})\], i.e. the degree to which the final decision moves towards the advised answer.
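A sketch of the calculation, including the truncation (Python, illustrative names):

```python
# Sketch of weight-on-advice, truncated to [0, 1] as described above.
def weight_on_advice(initial: float, advice: float, final: float) -> float:
    if advice == initial:
        return float("nan")  # undefined when the advice matches the initial answer
    woa = (final - initial) / (advice - initial)
    return min(1.0, max(0.0, woa))

print(weight_on_advice(1950, 1960, 1955))  # moved halfway towards the advice
# 0.5
```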
Influence is the primary outcome measure, and is thus expected to differ between advisors and feedback conditions.
It’s good to keep a general eye on the distribution of weight-on-advice on a trial-by-trial basis.
## Warning: Ignoring unknown parameters: binwidth, bins, pad
Participants should reduce their error as a function of advice, and this is expected to be most pronounced for the Accurate advisors. Here we plot error reduction, which (unlike most of the following variables) is obtained with initial - final, as opposed to final - initial. This is because error is expected to be lower on most final decisions than initial decisions, and helpfully makes larger positive values indicative of better performance.
Participants may benefit from advice in terms of the accuracy of their responses. Here we code a correct response amended to an incorrect response as -1, responses whose correctness is unchanged as 0, and incorrect responses amended to correct responses as 1.
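This coding reduces to a simple difference of correctness indicators (illustrative sketch):

```python
# Sketch of the accuracy-change coding described above.
def accuracy_change(initial_correct: bool, final_correct: bool) -> int:
    # -1: correct -> incorrect; 0: unchanged; +1: incorrect -> correct
    return int(final_correct) - int(initial_correct)

print(accuracy_change(False, True))
# 1
```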
Score changes are very straightforward - simply the points awarded for the final response minus the points awarded for the initial response, so positive values indicate improvements in score while negative values indicate the final response was less valuable than the initial response.
It can be helpful to get a sense of how much participants are adjusting their responses in general, i.e. the mean distance between their first and last responses.
Marker widths are coded as 3, 2, and 1 for 3, 9, and 27 years respectively. Final - initial confidence gives the change.
Advisor agreement is calculated using the participant’s initial decision and the advice. In the same way, it is possible to calculate agreement between the advice and the participant’s final decision, and to examine whether this changes (i.e. whether the participant shifts to follow disagreeing advice).
The hypotheses being tested here are:
- Participants benefit from advice
    - when feedback is available
    - and when feedback is not available
- Participants use feedback to determine the quality of advice, and thus put more weight on accurate advisors in the feedback condition compared to the no-feedback condition
- Participants denied feedback use agreement as a proxy for accuracy, and thus put more weight on agreeing advisors in the no-feedback condition compared to the feedback condition
All hypotheses are tested using the trials on which only one advisor provided advice.
Participants should have lower error on their last decisions than on their first decisions.
t(30) = -8.45, p < .001, d = 0.98, BF = 5896686.63; M|last = 13.39 [11.91, 14.87], M|first = 17.87 [16.03, 19.72]
Supporting this primary result, participants should also score more points on their last answer than on their first answer, although given that they are allowed to change their confidence, they may sometimes miss the correct answer because they choose a thinner marker.
t(30) = 5.93, p < .001, d = 0.43, BF = 10661.07; M|last = 2.49 [1.73, 3.26], M|first = 1.65 [1.00, 2.31]
The above results could be because participants increase their confidence (i.e. not moving the centre of their marker and therefore not changing the error estimate, but changing the points scored). This can be assessed by looking for differences in correctness.
t(30) = 7.32, p < .001, d = 1.03, BF = 375952.16; M|last = 0.36 [0.31, 0.41], M|first = 0.23 [0.19, 0.27]
The above effect should hold for participants in the feedback condition.
t(13) = -6.52, p < .001, d = 1.37, BF = 1216.22; M|last = 14.78 [12.74, 16.82], M|first = 20.22 [17.69, 22.75]
And for score as the outcome:
t(13) = 4.76, p < .001, d = 1.20, BF = 90.94; M|last = 1.80 [1.47, 2.13], M|first = 1.17 [0.88, 1.45]
The effect should also hold for participants in the no-feedback condition.
t(16) = -5.76, p < .001, d = 0.82, BF = 851.30; M|last = 12.25 [10.11, 14.39], M|first = 15.94 [13.48, 18.40]
And for score as the outcome:
t(16) = 4.39, p < .001, d = 0.40, BF = 74.34; M|last = 3.06 [1.70, 4.43], M|first = 2.05 [0.85, 3.25]
Participants in the feedback condition should have higher weight on advice for the accurate advisor than participants in the no-feedback condition.
t(26.09) = 0.44, p = .663, d = 0.16, BF = 0.37; M|feedback = 0.64 [0.50, 0.79], M|¬feedback = 0.60 [0.49, 0.72]
The opposite pattern is expected for the agreeing advisor: feedback should result in lower weight on advice.
t(27.53) = -1.69, p = .102, d = 0.59, BF = 0.91; M|feedback = 0.37 [0.28, 0.47], M|¬feedback = 0.50 [0.37, 0.62]
We can also ask which advisor was preferred (if any) by participants in each condition:
Feedback: t(13) = -4.69, p < .001, d = 1.28, BF = 82.12; M|agr = 0.37 [0.28, 0.47], M|acc = 0.64 [0.50, 0.79]
No feedback: t(16) = -1.89, p = .077, d = 0.46, BF = 1.06; M|agr = 0.50 [0.37, 0.62], M|acc = 0.60 [0.49, 0.72]
The questions above can perhaps be more suitably answered using a hierarchical Bayesian ANOVA analogue:
## Warning: `as_data_frame()` is deprecated as of tibble 2.0.0.
## Please use `as_tibble()` instead.
## The signature and semantics have changed, see `?as_tibble`.
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_warnings()` to see where this warning was generated.
Because I don’t quite trust my usage of the BANOVA package above, we’re also looking at this using EZ ANOVA:
## Registered S3 methods overwritten by 'lme4':
## method from
## cooks.distance.influence.merMod car
## influence.merMod car
## dfbeta.influence.merMod car
## dfbetas.influence.merMod car
## Warning: Data is unbalanced (unequal N per group). Make sure you specified a
## well-considered value for the type argument to ezANOVA().
## $ANOVA
## Effect DFn DFd SSn SSd
## 1 (Intercept) 1 27 17.513358500 1.8837495
## 2 hasFeedback 1 27 0.025464688 1.8837495
## 3 firstAdvisor 1 27 0.240102055 1.8837495
## 5 advisor0idDescription 1 27 0.495737699 0.6102773
## 4 hasFeedback:firstAdvisor 1 27 0.002971187 1.8837495
## 6 hasFeedback:advisor0idDescription 1 27 0.102666358 0.6102773
## 7 firstAdvisor:advisor0idDescription 1 27 0.108960880 0.6102773
## 8 hasFeedback:firstAdvisor:advisor0idDescription 1 27 0.005482717 0.6102773
## F p p<.05 ges
## 1 251.02100400 3.397719e-15 * 0.875344691
## 2 0.36498834 5.507919e-01 0.010107075
## 3 3.44141068 7.453149e-02 0.087816657
## 5 21.93251627 7.135773e-05 * 0.165811621
## 4 0.04258637 8.380544e-01 0.001189904
## 6 4.54218347 4.232411e-02 * 0.039537347
## 7 4.82066682 3.689409e-02 * 0.041859929
## 8 0.24256736 6.263393e-01 0.002193517
##
## $aov
##
## Call:
## aov(formula = formula(aov_formula), data = data)
##
## Grand Mean: 0.5314824
##
## Stratum 1: pid
##
## Terms:
## hasFeedback firstAdvisor hasFeedback:firstAdvisor Residuals
## Sum of Squares 0.0281925 0.2401021 0.0029712 1.8837495
## Deg. of Freedom 1 1 1 27
##
## Residual standard error: 0.2641373
## 3 out of 6 effects not estimable
## Estimated effects may be unbalanced
##
## Stratum 2: pid:advisor0idDescription
##
## Terms:
## advisor0idDescription hasFeedback:advisor0idDescription
## Sum of Squares 0.4957377 0.0991407
## Deg. of Freedom 1 1
## firstAdvisor:advisor0idDescription
## Sum of Squares 0.1089609
## Deg. of Freedom 1
## hasFeedback:firstAdvisor:advisor0idDescription Residuals
## Sum of Squares 0.0054827 0.6102773
## Deg. of Freedom 1 27
##
## Residual standard error: 0.1503425
## Estimated effects may be unbalanced
We suspect clearer results will be obtained using offbrand trials only, but due to there being substantially fewer offbrand than onbrand trials the data are likely to be noisier.
## [1] "Dropping incomplete case pid = 9a0ec1f1"
## Warning: Converting "advisor0idDescription" to factor for ANOVA.
## Warning: Converting "firstAdvisor" to factor for ANOVA.
## Warning: Data is unbalanced (unequal N per group). Make sure you specified a
## well-considered value for the type argument to ezANOVA().
## $ANOVA
## Effect DFn DFd SSn SSd
## 1 (Intercept) 1 26 19.558527795 3.157422
## 2 hasFeedback 1 26 0.190613225 3.157422
## 3 firstAdvisor 1 26 0.507750930 3.157422
## 5 advisor0idDescription 1 26 0.335850589 2.153357
## 4 hasFeedback:firstAdvisor 1 26 0.303635900 3.157422
## 6 hasFeedback:advisor0idDescription 1 26 0.813852750 2.153357
## 7 firstAdvisor:advisor0idDescription 1 26 0.008925564 2.153357
## 8 hasFeedback:firstAdvisor:advisor0idDescription 1 26 0.027967110 2.153357
## F p p<.05 ges
## 1 161.0559907 1.200402e-12 * 0.786452458
## 2 1.5696172 2.214139e-01 0.034648177
## 3 4.1811086 5.112399e-02 0.087264463
## 5 4.0551164 5.449531e-02 0.059478058
## 4 2.5003099 1.259140e-01 0.054081482
## 6 9.8265948 4.233168e-03 * 0.132881898
## 7 0.1077688 7.453281e-01 0.001677831
## 8 0.3376796 5.661756e-01 0.005238516
##
## $aov
##
## Call:
## aov(formula = formula(aov_formula), data = data)
##
## Grand Mean: 0.5709426
##
## Stratum 1: pid
##
## Terms:
## hasFeedback firstAdvisor hasFeedback:firstAdvisor Residuals
## Sum of Squares 0.1850762 0.5077509 0.3036359 3.1574220
## Deg. of Freedom 1 1 1 26
##
## Residual standard error: 0.3484814
## 3 out of 6 effects not estimable
## Estimated effects may be unbalanced
##
## Stratum 2: pid:advisor0idDescription
##
## Terms:
## advisor0idDescription hasFeedback:advisor0idDescription
## Sum of Squares 0.3358506 0.8154516
## Deg. of Freedom 1 1
## firstAdvisor:advisor0idDescription
## Sum of Squares 0.0089256
## Deg. of Freedom 1
## hasFeedback:firstAdvisor:advisor0idDescription Residuals
## Sum of Squares 0.0279671 2.1533575
## Deg. of Freedom 1 26
##
## Residual standard error: 0.2877871
## Estimated effects may be unbalanced
We suspect participants’ responses may be driven by properties of the advice alone, while the properties of the advisor are ignored (i.e. participants behave as if all trials are discrete). If this is the case, there will be no differences between advisors’ regression lines when regressing predictors against WoA.
Previous research indicates a U-shaped curve for distance × WoA - first we check whether this is appropriate.
## `geom_smooth()` using formula 'y ~ x'
First we want to take a look at how the advisor questionnaire data break down.
## No summary function supplied, defaulting to `mean_se()`
Next we want to see whether participants who rate an advisor highly are also more influenced by that advisor’s advice.
We can model some of the changes investigated above as a function of the properties of each of the advisors on a trial. These models can be thought of as constituting more specific hypotheses about the mechanics underlying the updating of advice quality on the basis of experience.
As a sanity check, we can run the established models of advisor updating as a consequence of feedback in the feedback condition. In this model, advisors’ advice is adjusted as a consequence of the feedback received by an amount equal to a free learning rate parameter: \[\omega_a^{t+1} = \omega_a^t + \lambda f(e_{a}^t, v^t)\] where \(\omega_a^t\) is the credibility of advisor \(a\) on trial \(t\), \(\lambda\) is a learning rate parameter, \(f(x,y)\) is a fitness function for an estimate \(x\) and target value \(y\), \(e_a^t\) is the estimate provided by advisor \(a\) on trial \(t\), and \(v^t\) the target value on trial \(t\).
In the simplest case, \(f(x, y)\) takes values of -1 or +1 for incorrect and correct estimates, respectively, while in other cases it may be continuous, e.g. the reciprocal of the error.
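The update rule with the simplest fitness function can be sketched as follows (Python; the learning rate of 0.1 and the starting credibility are purely illustrative, and in the model \(\lambda\) is a free parameter to be fitted):

```python
# Sketch of the update rule with the simplest fitness function:
# f returns +1 for correct advice and -1 for incorrect advice.
def update_credibility(omega: float, advice_correct: bool,
                       learning_rate: float = 0.1) -> float:
    fitness = 1.0 if advice_correct else -1.0
    # omega_{t+1} = omega_t + lambda * f(e, v)
    return omega + learning_rate * fitness

w = 0.5  # starting credibility (illustrative)
for correct in (True, True, False):
    w = update_credibility(w, correct)
print(round(w, 2))
# 0.6
```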
Thanks as always to Nick Yeung and the other folks at the ACC Lab.
| Package | Citations |
|---|---|
| base | R Core Team (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/. |
| ez | Michael A. Lawrence (2016). ez: Easy Analysis and Visualization of Factorial Experiments. R package version 4.4-0. https://CRAN.R-project.org/package=ez |
| prettyMD | Matt Jaquiery (2020). prettyMD: Format Test Outputs with Markdown. R package version 0.1.4. |
| knitr | Yihui Xie (2020). knitr: A General-Purpose Package for Dynamic Report Generation in R. R package version 1.29.<br/><br/>Yihui Xie (2015) Dynamic Documents with R and knitr. 2nd edition. Chapman and Hall/CRC. ISBN 978-1498716963<br/><br/>Yihui Xie (2014) knitr: A Comprehensive Tool for Reproducible Research in R. In Victoria Stodden, Friedrich Leisch and Roger D. Peng, editors, Implementing Reproducible Computational Research. Chapman and Hall/CRC. ISBN 978-1466561595 |
| BANOVA | |
| BayesFactor | Richard D. Morey and Jeffrey N. Rouder (2018). BayesFactor: Computation of Bayes Factors for Common Designs. R package version 0.9.12-4.2. https://CRAN.R-project.org/package=BayesFactor |
| Matrix | Douglas Bates and Martin Maechler (2019). Matrix: Sparse and Dense Matrix Classes and Methods. R package version 1.2-18. https://CRAN.R-project.org/package=Matrix |
| coda | Martyn Plummer, Nicky Best, Kate Cowles and Karen Vines (2006). CODA: Convergence Diagnosis and Output Analysis for MCMC, R News, vol 6, 7-11 |
| lsr | Navarro, D. J. (2015) Learning statistics with R: A tutorial for psychology students and other beginners. (Version 0.5) University of Adelaide. Adelaide, Australia |
| curl | Jeroen Ooms (2019). curl: A Modern and Flexible Web Client for R. R package version 4.3. https://CRAN.R-project.org/package=curl |
| forcats | Hadley Wickham (2020). forcats: Tools for Working with Categorical Variables (Factors). R package version 0.5.0. https://CRAN.R-project.org/package=forcats |
| stringr | Hadley Wickham (2019). stringr: Simple, Consistent Wrappers for Common String Operations. R package version 1.4.0. https://CRAN.R-project.org/package=stringr |
| dplyr | Hadley Wickham, Romain François, Lionel Henry and Kirill Müller (2020). dplyr: A Grammar of Data Manipulation. R package version 1.0.0. https://CRAN.R-project.org/package=dplyr |
| purrr | Lionel Henry and Hadley Wickham (2020). purrr: Functional Programming Tools. R package version 0.3.4. https://CRAN.R-project.org/package=purrr |
| readr | Hadley Wickham, Jim Hester and Romain Francois (2018). readr: Read Rectangular Text Data. R package version 1.3.1. https://CRAN.R-project.org/package=readr |
| tidyr | Hadley Wickham and Lionel Henry (2020). tidyr: Tidy Messy Data. R package version 1.1.0. https://CRAN.R-project.org/package=tidyr |
| tibble | Kirill Müller and Hadley Wickham (2020). tibble: Simple Data Frames. R package version 3.0.1. https://CRAN.R-project.org/package=tibble |
| ggplot2 | H. Wickham. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York, 2016. |
| tidyverse | Wickham et al., (2019). Welcome to the tidyverse. Journal of Open Source Software, 4(43), 1686, https://doi.org/10.21105/joss.01686 |
| testthat | Hadley Wickham. testthat: Get Started with Testing. The R Journal, vol. 3, no. 1, pp. 5–10, 2011 |
Matt Jaquiery is funded by a studentship from the Medical Research Council (reference 1943590) and the University of Oxford Department of Experimental Psychology (reference 17/18_MSD_661552).
## Time stamp: 2020-07-08 11:24:22
##
## Runtime
## user system elapsed
## 260.15 33.46 295.15
##
## R version 4.0.2 (2020-06-22)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 18362)
##
## Matrix products: default
##
## locale:
## [1] LC_COLLATE=English_United Kingdom.1252
## [2] LC_CTYPE=English_United Kingdom.1252
## [3] LC_MONETARY=English_United Kingdom.1252
## [4] LC_NUMERIC=C
## [5] LC_TIME=English_United Kingdom.1252
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] ez_4.4-0 prettyMD_0.1.4 knitr_1.29
## [4] BANOVA_1.1.7 BayesFactor_0.9.12-4.2 Matrix_1.2-18
## [7] coda_0.19-3 lsr_0.5 curl_4.3
## [10] forcats_0.5.0 stringr_1.4.0 dplyr_1.0.0
## [13] purrr_0.3.4 readr_1.3.1 tidyr_1.1.0
## [16] tibble_3.0.1 ggplot2_3.3.2 tidyverse_1.3.0
## [19] testthat_2.3.2
##
## loaded via a namespace (and not attached):
## [1] minqa_1.2.4 colorspace_1.4-1 ellipsis_0.3.1
## [4] rio_0.5.16 fs_1.4.1 rstudioapi_0.11
## [7] farver_2.0.3 rstan_2.19.3 MatrixModels_0.4-1
## [10] fansi_0.4.1 mvtnorm_1.1-1 lubridate_1.7.9
## [13] xml2_1.3.2 splines_4.0.2 jsonlite_1.7.0
## [16] nloptr_1.2.2.2 broom_0.5.6 dbplyr_1.4.4
## [19] rjags_4-10 compiler_4.0.2 httr_1.4.1
## [22] backports_1.1.7 assertthat_0.2.1 cli_2.0.2
## [25] htmltools_0.5.0 prettyunits_1.1.1 tools_4.0.2
## [28] gtable_0.3.0 glue_1.4.1 reshape2_1.4.4
## [31] Rcpp_1.0.4.6 carData_3.0-4 cellranger_1.1.0
## [34] vctrs_0.3.1 nlme_3.1-148 xfun_0.15
## [37] ps_1.3.3 openxlsx_4.1.5 lme4_1.1-23
## [40] rvest_0.3.5 lifecycle_0.2.0 runjags_2.0.4-6
## [43] gtools_3.8.2 statmod_1.4.34 MASS_7.3-51.6
## [46] scales_1.1.1 hms_0.5.3 parallel_4.0.2
## [49] inline_0.3.15 yaml_2.2.1 pbapply_1.4-2
## [52] gridExtra_2.3 loo_2.3.0 StanHeaders_2.21.0-5
## [55] stringi_1.4.6 highr_0.8 boot_1.3-25
## [58] pkgbuild_1.0.8 zip_2.0.4 rlang_0.4.6
## [61] pkgconfig_2.0.3 matrixStats_0.56.0 evaluate_0.14
## [64] lattice_0.20-41 labeling_0.3 processx_3.4.2
## [67] tidyselect_1.1.0 plyr_1.8.6 magrittr_1.5
## [70] R6_2.4.1 generics_0.0.2 DBI_1.1.0
## [73] pillar_1.4.4 haven_2.3.1 foreign_0.8-80
## [76] withr_2.2.0 mgcv_1.8-31 abind_1.4-5
## [79] modelr_0.1.8 crayon_1.3.4 car_3.0-8
## [82] rmarkdown_2.3 grid_4.0.2 readxl_1.3.1
## [85] data.table_1.12.8 blob_1.2.1 callr_3.4.3
## [88] reprex_0.3.0 digest_0.6.25 webshot_0.5.2
## [91] RcppParallel_5.0.2 stats4_4.0.2 munsell_0.5.0
## [94] viridisLite_0.3.0 kableExtra_1.1.0